The first close-up of the "flip-flop" phenomenon in a single star
We present temperature maps of the active late-type giant FK Com which provide the first imaging record of the "flip-flop" phenomenon in a single star. The phenomenon, in which the main part of the spot activity shifts by 180 degrees in longitude, was discovered a decade ago in FK Com and later reported in a number of RS CVn binaries and in a single young dwarf. With the surface images obtained immediately before and after the "flip-flop", we clearly show that the "flip-flop" phenomenon in FK Com is caused by a change in the relative strengths of the spot groups at the two active longitudes, with no actual spot movement across the stellar surface, i.e. exactly as it happens in other active stars. Comment: 4 pages, accepted by A&A Letters
Injecting Lexical Contrast into Word Vectors by Guiding Vector Space Specialisation
Word vector space specialisation models offer a portable, lightweight approach to fine-tuning arbitrary distributional vector spaces to discern between synonymy and antonymy. Their effectiveness is drawn from external linguistic constraints that specify the exact lexical relation between words. In this work, we show that a careful selection of the external constraints can steer and improve the specialisation. By simply selecting appropriate constraints, we report state-of-the-art results on a suite of tasks with well-defined benchmarks where modeling lexical contrast is crucial: 1) true semantic similarity, with the highest reported scores on SimLex-999 and SimVerb-3500 to date; 2) detecting antonyms; and 3) distinguishing antonyms from synonyms.
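As a toy illustration of the attract/repel mechanics behind such specialisation models (this is our own minimal sketch, not the paper's model; the `specialise` helper, learning rate, and constraint lists are hypothetical), pairwise updates can be applied directly to unit-normalised vectors:

```python
import numpy as np

def specialise(vectors, synonyms, antonyms, lr=0.1, epochs=20):
    """Toy specialisation: nudge unit-normalised word vectors so that
    synonym pairs gain cosine similarity and antonym pairs lose it."""
    unit = lambda v: v / np.linalg.norm(v)
    for w in vectors:
        vectors[w] = unit(vectors[w])
    for _ in range(epochs):
        for w1, w2 in synonyms:              # attract: pull the pair together
            v1, v2 = vectors[w1].copy(), vectors[w2].copy()
            vectors[w1] = unit(v1 + lr * v2)
            vectors[w2] = unit(v2 + lr * v1)
        for w1, w2 in antonyms:              # repel: push the pair apart
            v1, v2 = vectors[w1].copy(), vectors[w2].copy()
            vectors[w1] = unit(v1 - lr * v2)
            vectors[w2] = unit(v2 - lr * v1)
    return vectors
```

Real specialisation models optimise a margin-based objective with a regularisation term that keeps vectors close to the original distributional space; the sketch above only shows how attract and repel constraints move vectors in opposite directions.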
Magnetic Structure of Rapidly Rotating FK Comae-Type Coronae
We present a three-dimensional simulation of the corona of an FK Com-type
rapidly rotating G giant using a magnetohydrodynamic model that was originally
developed for the solar corona in order to capture the more realistic,
non-potential coronal structure. We drive the simulation with surface maps for
the radial magnetic field obtained from a stellar dynamo model of the FK Com
system. This enables us to obtain the coronal structure for different field
topologies representing different periods of time. We find that the corona of
such an FK Com-like star, including the large scale coronal loops, is dominated
by a strong toroidal component of the magnetic field. This is a result of part
of the field being dragged by the radial outflow, while the other part remains
attached to the rapidly rotating stellar surface. This tangling of the magnetic
field, in addition to a reduction in the radial flow component, leads to a
flattening of the gas density profile with distance in the inner part of the
corona. The three-dimensional simulation provides a global view of the coronal
structure. Some aspects of the results, such as the toroidal wrapping of the
magnetic field, should also be applicable to coronae on fast rotators in
general, which our study shows can be considerably different from the
well-studied and well-observed solar corona. Studying the global structure of
such coronae should also lead to a better understanding of their related
stellar processes, such as flares and coronal mass ejections, and in
particular, should lead to an improved understanding of mass and angular
momentum loss from such systems. Comment: Accepted to ApJ, 10 pages, 6 figures
A systematic study of leveraging subword information for learning word representations
The use of subword-level information (e.g., characters, character n-grams, morphemes) has become ubiquitous in modern word representation learning. Its importance is attested especially for morphologically rich languages, which generate a large number of rare words. Despite a steadily increasing interest in such subword-informed word representations, their systematic comparative analysis across typologically diverse languages and different tasks is still missing. In this work, we deliver such a study focusing on the variation of two crucial components required for subword-level integration into word representation models: 1) segmentation of words into subword units, and 2) subword composition functions to obtain final word representations. We propose a general framework for learning subword-informed word representations that allows for easy experimentation with different segmentation and composition components, also including more advanced techniques based on position embeddings and self-attention. Using the unified framework, we run experiments over a large number of subword-informed word representation configurations (60 in total) on 3 tasks (general and rare word similarity, dependency parsing, fine-grained entity typing) for 5 languages representing 3 language types. Our main results clearly indicate that there is no "one-size-fits-all" configuration, as performance is both language- and task-dependent. We also show that configurations based on unsupervised segmentation (e.g., BPE, Morfessor) are sometimes comparable to or even outperform the ones based on supervised word segmentation.
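A minimal sketch of the two components the abstract names, using one concrete choice for each: character n-gram segmentation and additive (averaging) composition, in the style of fastText. The class and parameter names here are our own, and real models learn the n-gram table rather than drawing it at random:

```python
import numpy as np

def char_ngrams(word, n_min=3, n_max=5):
    """Segmentation component: character n-grams of the boundary-padded word."""
    padded = f"<{word}>"
    grams = []
    for n in range(n_min, n_max + 1):
        grams += [padded[i:i + n] for i in range(len(padded) - n + 1)]
    return grams

class SubwordEmbedder:
    """Composition component (toy): a word vector is the mean of its
    character n-gram vectors, here randomly initialised on first use."""
    def __init__(self, dim=200, seed=0):
        self.dim = dim
        self.rng = np.random.default_rng(seed)
        self.table = {}            # n-gram -> vector

    def _vec(self, gram):
        if gram not in self.table:
            self.table[gram] = self.rng.normal(size=self.dim) / np.sqrt(self.dim)
        return self.table[gram]

    def embed(self, word):
        return np.mean([self._vec(g) for g in char_ngrams(word)], axis=0)
```

Because morphological variants share most of their n-grams, even this untrained sketch places "walking" closer to "walked" than to an unrelated word, which is the effect that helps rare words in morphologically rich languages.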
Investigating cross-lingual alignment methods for contextualized embeddings with token-level evaluation
In this paper, we present a thorough investigation of methods that align pre-trained contextualized embeddings into a shared cross-lingual context-aware embedding space, providing strong reference benchmarks for future context-aware cross-lingual models. We propose a novel and challenging task, Bilingual Token-level Sense Retrieval (BTSR). It specifically evaluates the accurate alignment of words with the same meaning in cross-lingual non-parallel contexts, which is currently not evaluated by existing tasks such as Bilingual Contextual Word Similarity and Sentence Retrieval. We show how the proposed BTSR task highlights the merits of different alignment methods. In particular, we find that context-average type-level alignment is effective in transferring monolingual contextualized embeddings cross-lingually, especially in non-parallel contexts, and at the same time improves the monolingual space. Furthermore, aligning independently trained models yields better performance than aligning multilingual embeddings with a shared vocabulary. Funding: Peterhouse College Studentship; ERC Consolidator Grant LEXICA
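Context-average type-level alignment can be sketched roughly as follows: average each word type's contextual token vectors into a single anchor, then fit an orthogonal map between the anchor matrices of the two languages (a standard Procrustes solution; the function names are our own):

```python
import numpy as np

def type_level_anchors(token_embs):
    """Average each word type's contextual token vectors into one anchor.
    token_embs: dict word -> list of per-occurrence contextual vectors."""
    return {w: np.mean(np.stack(vs), axis=0) for w, vs in token_embs.items()}

def procrustes(X, Y):
    """Orthogonal W minimising ||X @ W - Y||_F (closed-form SVD solution)."""
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt
```

Anchors for the word pairs of a bilingual seed lexicon would form the rows of X and Y; the fitted W then maps every token-level source embedding into the target space, including tokens in non-parallel contexts.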
Acquiring verb classes through bottom-up semantic verb clustering
In this paper, we present the first analysis of bottom-up manual semantic clustering of verbs in three languages: English, Polish and Croatian. Verb classes incorporating syntactic and semantic information have been shown to support many NLP tasks by allowing abstraction from individual words and thereby alleviating data sparseness. The availability of such classifications is, however, still non-existent or limited in most languages. While a range of automatic verb classification approaches have been proposed, high-quality resources and gold standards are needed for evaluation and to improve the performance of NLP systems. We investigate whether semantic verb classes in three different languages can be reliably obtained from native speakers without linguistics training. The analysis of inter-annotator agreement shows an encouraging degree of overlap in the classifications produced for each language individually, as well as across all three languages. Comparative examination of the resultant classifications provides interesting insights into cross-linguistic semantic commonalities and patterns of ambiguity.
Isomorphic Transfer of Syntactic Structures in Cross-Lingual NLP
The transfer or sharing of knowledge between languages is a popular solution to resource scarcity in NLP. However, the effectiveness of cross-lingual transfer can be challenged by variation in syntactic structures. Frameworks such as Universal Dependencies (UD) are designed to be cross-lingually consistent, but even in carefully designed resources, trees representing equivalent sentences may not always overlap. In this paper, we measure cross-lingual syntactic variation, or anisomorphism, in the UD treebank collection, considering both morphological and structural properties. We show that reducing the level of anisomorphism yields consistent gains in cross-lingual transfer tasks. We introduce a source language selection procedure that facilitates effective cross-lingual parser transfer, and propose a typologically driven method for syntactic tree processing which reduces anisomorphism. Our results show the effectiveness of this method for both machine translation and cross-lingual sentence similarity, demonstrating the importance of syntactic structure compatibility for boosting cross-lingual transfer in NLP.
Do we really need fully unsupervised cross-lingual embeddings?
Recent efforts in cross-lingual word embedding (CLWE) learning have predominantly focused on fully unsupervised approaches that project monolingual embeddings into a shared cross-lingual space without any cross-lingual signal. The lack of any supervision makes such approaches conceptually attractive. Yet, their only core difference from (weakly) supervised projection-based CLWE methods is in the way they obtain a seed dictionary used to initialize an iterative self-learning procedure. The fully unsupervised methods have arguably become more robust, and their primary use case is CLWE induction for pairs of resource-poor and distant languages. In this paper, we question the ability of even the most robust unsupervised CLWE approaches to induce meaningful CLWEs in these more challenging settings. A series of bilingual lexicon induction (BLI) experiments with 15 diverse languages (210 language pairs) shows that fully unsupervised CLWE methods still fail for a large number of language pairs (e.g., they yield zero BLI performance for 87/210 pairs). Even when they succeed, they never surpass the performance of weakly supervised methods (seeded with 500-1,000 translation pairs) using the same self-learning procedure in any BLI setup, and the gaps are often substantial. These findings call for revisiting the main motivations behind fully unsupervised CLWE methods.
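A minimal sketch of the shared projection-based pipeline the abstract refers to: a seed dictionary initialises an iterative self-learning loop that alternates mapping induction and dictionary re-induction. This is our own toy version with plain nearest-neighbour retrieval; real systems add CSLS retrieval, frequency-based vocabulary cutoffs, and stochastic dictionary dropout:

```python
import numpy as np

def procrustes(X, Y):
    """Orthogonal map minimising ||X @ W - Y||_F."""
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

def self_learning(src, tgt, seed_pairs, iters=5):
    """Alternate (1) fitting a mapping on the current dictionary and
    (2) re-inducing the dictionary by nearest neighbour under that mapping.
    src, tgt: row-normalised (n, d) embedding matrices;
    seed_pairs: initial (src_idx, tgt_idx) translation pairs."""
    pairs = list(seed_pairs)
    W = np.eye(src.shape[1])
    for _ in range(iters):
        X = src[[i for i, _ in pairs]]
        Y = tgt[[j for _, j in pairs]]
        W = procrustes(X, Y)
        sims = (src @ W) @ tgt.T          # cosine, since rows are unit-norm
        pairs = [(i, int(np.argmax(sims[i]))) for i in range(len(src))]
    return W, pairs
```

The weakly supervised setting seeds `pairs` with 500-1,000 real translations; fully unsupervised methods instead bootstrap the seed from distributional or adversarial heuristics, which is precisely the step the paper finds fragile for distant language pairs.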
Abundance analysis, spectral variability, and search for the presence of a magnetic field in the typical PGa star HD19400
The aim of this study is to carry out an abundance determination, to search
for spectral variability and for the presence of a weak magnetic field in the
typical PGa star HD19400. High-resolution, high signal-to-noise HARPS
spectropolarimetric observations of HD19400 were obtained at three different
epochs in 2011 and 2013. For the first time, we present abundances of various
elements determined using an ATLAS12 model, including the abundances of a
number of elements not analysed by previous studies, such as Ne I, Ga II, and
Xe II. Several lines of As II are also present in the spectra of HD19400. To
study the variability, we compared the behaviour of the line profiles of
various elements. We report on the first detection of anomalous shapes of line
profiles belonging to Mn and Hg, and the variability of the line profiles
belonging to the elements Hg, P, Mn, Fe, and Ga. We suggest that the
variability of the line profiles of these elements is caused by their
non-uniform surface distribution, similar to the presence of chemical spots
detected in HgMn stars. The search for the presence of a magnetic field was
carried out using the moment technique and the SVD method. Our measurements of
the magnetic field with the moment technique using 22 Mn II lines indicate the
potential existence of a weak variable longitudinal magnetic field at the first epoch. The SVD method applied to the Mn II lines indicates <B_z> = -76 ± 25 G at the first epoch, and at the same epoch the SVD analysis of the observations using the Fe II lines shows <B_z> = -91 ± 35 G. The calculated false alarm probability values, 0.008 and 0.003, respectively, are above the value 10^{-3}, indicating no detection. Comment: 13+6 pages, 14 figures, 6+1 tables, including the online-only material, accepted for publication in MNRAS